On iterative methods for linearly constrained entropy maximization
Authors
Abstract
Similar Resources
Pattern Search Methods for Linearly Constrained Minimization
We extend pattern search methods to linearly constrained minimization. We develop a general class of feasible point pattern search algorithms and prove global convergence to a Karush-Kuhn-Tucker point. As in the case of unconstrained minimization, pattern search methods for linearly constrained problems accomplish this without explicit recourse to the gradient or the directional derivative of th...
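As a rough illustration of the feasible-point idea sketched in that abstract (not the algorithm analyzed in the paper), the snippet below polls the coordinate directions, accepts only feasible improving points, and contracts the step when the poll fails. The function name `pattern_search` and the restriction to simple bound constraints are assumptions made for this example.

```python
import numpy as np

def pattern_search(f, x0, lower, upper, step=1.0, tol=1e-8, max_iter=10000):
    """Feasible-point pattern search sketch for simple bound constraints:
    poll the +/- coordinate directions, accept a feasible point that
    decreases f, and halve the step when no poll point improves."""
    x = np.clip(np.asarray(x0, dtype=float), lower, upper)
    fx = f(x)
    n = x.size
    for _ in range(max_iter):
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):   # coordinate poll set
            trial = x + step * d
            if np.all(trial >= lower) and np.all(trial <= upper):  # stay feasible
                f_trial = f(trial)
                if f_trial < fx:
                    x, fx, improved = trial, f_trial, True
                    break
        if not improved:
            step *= 0.5                                # contract the mesh
            if step < tol:
                break
    return x, fx

# usage: minimize a separable quadratic over the box [0, 2]^2
x_best, f_best = pattern_search(lambda x: (x[0] - 1.5)**2 + (x[1] + 1.0)**2,
                                x0=[1.0, 1.0], lower=0.0, upper=2.0)
# x_best ends up near (1.5, 0.0), the constrained minimizer
```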
Spectral gradient methods for linearly constrained optimization
Linearly constrained optimization problems with simple bounds are considered in the present work. First, a preconditioned spectral gradient method is defined for the case in which no simple bounds are present. This algorithm can be viewed as a quasi-Newton method in which the approximate Hessians satisfy a weak secant equation. The spectral choice of steplength is embedded into the Hessian appro...
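A minimal projected spectral-gradient sketch for the simple-bound case mentioned above is given below; the Barzilai-Borwein steplength sᵀs / sᵀy is the "spectral choice" tied to a weak secant condition. The nonmonotone line search and safeguards a practical implementation would carry are omitted, and the function name and parameters are illustrative.

```python
import numpy as np

def projected_spectral_gradient(grad, x0, lower, upper, n_iter=500,
                                lam_min=1e-10, lam_max=1e10):
    """Sketch of a projected spectral (Barzilai-Borwein) gradient iteration
    for simple bounds; not the preconditioned method of the paper."""
    proj = lambda z: np.clip(z, lower, upper)
    x = proj(np.asarray(x0, dtype=float))
    g = grad(x)
    lam = 1.0
    for _ in range(n_iter):
        x_new = proj(x - lam * g)
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        # spectral steplength s's / s'y, safeguarded; skip if curvature fails
        lam = np.clip(s @ s / sy, lam_min, lam_max) if sy > 0 else lam_max
        x, g = x_new, g_new
    return x

# usage: minimize ||x - c||^2 / 2 over the box [0, 1]^3
c = np.array([1.5, -0.3, 0.4])
x_star = projected_spectral_gradient(lambda x: x - c, np.zeros(3), 0.0, 1.0)
# x_star ≈ [1.0, 0.0, 0.4]
```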
Rescaled proximal methods for linearly constrained convex problems
We present an inexact interior-point proximal method to solve linearly constrained convex problems. In fact, we derive a primal-dual algorithm to solve the KKT conditions of the optimization problem using a modified version of the rescaled proximal method. We also present a pure primal method. The proposed proximal method has as a distinctive feature the possibility of allowing inexact inner step...
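The rescaled interior-point machinery of that paper is specific, but the primal-dual structure it builds on can be illustrated with a plain augmented-Lagrangian (method-of-multipliers) sketch for min f(x) subject to Ax = b, with a deliberately inexact gradient-based inner loop. The names, step sizes, and penalty parameter here are assumptions for the example, not the authors' method.

```python
import numpy as np

def method_of_multipliers(f_grad, A, b, x0, rho=10.0, n_outer=50,
                          n_inner=200, lr=1e-2):
    """Augmented-Lagrangian sketch for min f(x) s.t. Ax = b.  The multiplier
    update y += rho*(Ax - b) is a proximal-point step on the dual; the inner
    minimization is performed inexactly by plain gradient steps."""
    x = np.asarray(x0, dtype=float)
    y = np.zeros(A.shape[0])
    for _ in range(n_outer):
        for _ in range(n_inner):   # inexact inner solve
            g = f_grad(x) + A.T @ (y + rho * (A @ x - b))
            x = x - lr * g
        y = y + rho * (A @ x - b)  # dual (multiplier) update
    return x, y

# usage: minimize ||x||^2 / 2 subject to x1 + x2 = 1
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
x_star, y_star = method_of_multipliers(lambda x: x, A, b, np.zeros(2))
# x_star ≈ [0.5, 0.5], y_star ≈ [-0.5]
```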
Implementing Generating Set Search Methods for Linearly Constrained Minimization
We discuss an implementation of a derivative-free generating set search method for linearly constrained minimization with no assumption of nondegeneracy placed on the constraints. The convergence guarantees for generating set search methods require that the set of search directions possesses certain geometrical properties that allow it to approximate the feasible region near the current iterate...
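A toy stand-in for the geometric requirement mentioned above: for simple bounds the coordinate directions generate the feasible region, and directions that would immediately cross a nearby bound are dropped from the poll set. General linear constraints require generators of the tangent cones of nearby constraint faces, which this sketch does not attempt; the function name and the tolerance `eps` are illustrative.

```python
import numpy as np

def conforming_directions(x, lower, upper, eps):
    """Build a poll set that conforms to simple bounds within distance eps
    of x: keep only coordinate directions along which a small feasible step
    is still possible."""
    n = x.size
    dirs = []
    for i in range(n):
        e = np.zeros(n); e[i] = 1.0
        if upper[i] - x[i] > eps:   # +e_i still points into the feasible set
            dirs.append(e)
        if x[i] - lower[i] > eps:   # -e_i still points into the feasible set
            dirs.append(-e)
    return dirs

# usage: near the lower-bound face x2 = 0 of the box [0, 1]^2, -e2 is dropped
x = np.array([0.5, 1e-6])
print(conforming_directions(x, lower=np.zeros(2), upper=np.ones(2), eps=1e-3))
```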
Newton-type methods for unconstrained and linearly constrained optimization
This paper describes two numerically stable methods for unconstrained optimization and their generalization when linear inequality constraints are added. The difference between the two methods is simply that one requires the Hessian matrix explicitly and the other does not. The methods are intimately based on the recurrence of matrix factorizations and are linked to earlier work on quasi-Newton...
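As a sketch of the factorization-based Newton step such methods revolve around (with the factor-recurrence and quasi-Newton machinery of the paper left out), the routine below solves the Newton system through a Cholesky factor and adds a diagonal shift whenever the Hessian is not positive definite. The routine name, shift rule, and test problem are assumptions for the example.

```python
import numpy as np

def modified_newton(grad, hess, x0, n_iter=50, beta=1e-3):
    """Modified-Newton sketch: factor H + tau*I = L L^T by Cholesky,
    increasing tau until the factorization succeeds, then solve for the
    step via the triangular factors."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        H, g = hess(x), grad(x)
        tau = 0.0
        while True:
            try:
                L = np.linalg.cholesky(H + tau * np.eye(len(x)))
                break
            except np.linalg.LinAlgError:
                tau = max(2 * tau, beta)   # shift until positive definite
        # solve (H + tau*I) p = -g using the factor L
        p = np.linalg.solve(L.T, np.linalg.solve(L, -g))
        x = x + p
    return x

# usage: Rosenbrock function from the standard starting point (-1.2, 1)
grad = lambda x: np.array([-400*x[0]*(x[1] - x[0]**2) - 2*(1 - x[0]),
                           200*(x[1] - x[0]**2)])
hess = lambda x: np.array([[1200*x[0]**2 - 400*x[1] + 2, -400*x[0]],
                           [-400*x[0], 200.0]])
print(modified_newton(grad, hess, np.array([-1.2, 1.0])))  # approaches (1, 1)
```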
Journal
Journal title: Banach Center Publications
Year: 1990
ISSN: 0137-6934, 1730-6299
DOI: 10.4064/-24-1-145-163